Module & Runtime Brainstorming
What kind of modules can we build for the Station?
- One-off computations
  - A general benchmarking tool to measure the performance of CDN networks: Saturn, Meson, Titan, Cloudflare, IPFS Gateway, etc.
  - A benchmarking module sending requests to Storage Providers - package DealBot as a module.
  - Ask the Station network to run an experiment (e.g. measure the performance of CID retrieval from Saturn vs. the IPFS Gateway and report the results back). The requestor can pay Station operators.
  - A module to request these jobs.
- Compute over data (based on IPFS COD)
  - A generic compute-over-data module (a job scheduler) that can support different kinds of jobs (the CDN benchmarking tool, IPFS COD, etc.)
- Network acceleration
  - Saturn L2 (retrieval acceleration)
  - An Indexer or Indexer Cache module to speed up indexing requests; other Station modules can use this internally too.
- Other
  - Data onboarding flow (web3.storage, nft.storage) - using Estuary instead of talking to SPs directly
  - Parallelized uploads
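The CDN benchmarking idea above could be as simple as timing retrieval of the same CID from several gateways. A minimal sketch, assuming an injected `fetchFn` (so it can run without network access); the gateway list and result shape are illustrative, not a real Station module API:

```javascript
// Sketch of a CDN benchmarking job: fetch the same CID from several
// gateways and record latency. Gateway list and result shape are
// illustrative assumptions, not a real Station module API.
const GATEWAYS = [
  'https://ipfs.io',
  'https://cloudflare-ipfs.com',
];

// `fetchFn` is injected so the benchmark can be tested without network access.
async function benchmarkCid(cid, gateways, fetchFn) {
  const results = [];
  for (const gateway of gateways) {
    const url = `${gateway}/ipfs/${cid}`;
    const start = Date.now();
    try {
      await fetchFn(url);
      results.push({ gateway, ok: true, ms: Date.now() - start });
    } catch (err) {
      results.push({ gateway, ok: false, error: String(err) });
    }
  }
  return results;
}
```

A real module would add warm-up requests, repeated samples, and percentile reporting, but the per-gateway timing loop is the core of it.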
Dietrich Ayala:
Also, @Ilya Kreymer and I were discussing at Iceland massively parallelizing and incentivizing large uploads - e.g., earn FIL by running IPFS Desktop and auto-executing upload tasks. @Julian Gruber said Saturn might be interested in doing this as part of an extensible task model in the Filecoin Station desktop app. Maybe worth sharing some workloads of real data to test with.
- Streaming CAR verification for HTTP Gateway integrations
- Tools like `ffmpeg` and `curl` have IPFS integrations, which retrieve data by requesting it via an HTTP gateway, optionally configured in `~/.ipfs/gateway`. See this blog post for more details. One problem these integrations have is that the data isn't getting verified! For complexity reasons there's no full IPFS stack in these C implementations, and also no streaming CAR validator. Station can help with this by running a local HTTP proxy that takes IPFS Gateway requests, rewrites them to include `?format=car`, validates the incoming data as it streams, and then exposes the raw chunks to `ffmpeg`, `curl`, etc. Users would then configure something like `https://localhost:8080` in their `~/.ipfs/gateway` config, and the rest would happen automatically in the background.
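The request-rewriting step of that proxy can be sketched with the standard WHATWG `URL` API. This only shows how an incoming gateway path would be turned into a `?format=car` request to the upstream gateway; the streaming CAR validation itself (e.g. built on `@ipld/car`) is omitted, and the function name is a hypothetical:

```javascript
// Sketch of the request-rewriting step of the proposed local verification
// proxy: the path the client (ffmpeg, curl, ...) asked the local proxy for
// gets `format=car` added, so the upstream gateway returns a verifiable
// CAR stream instead of raw, unverified bytes.
function rewriteToCarRequest(upstreamGateway, requestUrl) {
  // Resolve the client's request path against the upstream gateway origin.
  const url = new URL(requestUrl, upstreamGateway);
  url.searchParams.set('format', 'car'); // ask for a CAR instead of raw bytes
  return url.toString();
}
```

The proxy would then validate each incoming CAR block against its CID before handing raw chunks back to the client.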
- fwiw Station can fetch a raw block or a CAR and use one of the preexisting JS libraries to parse them:
https://docs.ipfs.tech/reference/http/gateway/#trusted-vs-trustless
https://www.npmjs.com/package/ipfs-car
https://www.npmjs.com/package/@ipld/car
- Serverless function invocation - the website pays directly to L2 nodes to run the code. The tricky part is how to do this reliably if a Station user closes the lid of their laptop. Maybe run multiple invocations in parallel?
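The "multiple invocations in parallel" idea maps naturally onto `Promise.any`: send the same job to several Station nodes and accept the first successful result, so one laptop going to sleep doesn't lose the job. A minimal sketch; `invoke` stands in for a real node RPC and all names are illustrative:

```javascript
// Sketch of redundant serverless invocation: the requestor dispatches the
// same job to several Station nodes and takes the first result that
// arrives. Promise.any resolves with the first fulfilled invocation and
// only rejects if every node fails.
async function invokeRedundantly(nodes, job, invoke) {
  return Promise.any(nodes.map((node) => invoke(node, job)));
}
```

A production version would also need deduplicated payment (only the winning node gets paid, or all probed nodes get a small fee) and result cross-checking.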
- Zero-knowledge computation
- WeatherXM - run distributed computation over data submitted by weather stations, leverage close distance between the weather stations and L2 nodes. https://weatherxm.com - Nicolas Tsiligaridis
- Network indexer L2 cache - talk to @Will Scott and David (dvd)
- Probing - random retrievals from SPs to check retrievability. The list of L1 nodes is static, therefore SPs can white-list them. We want to test retrievability from anonymous nodes.
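The probing idea boils down to sampling random (provider, CID) pairs and attempting retrievals from anonymous nodes. A testable sketch with an injected `retrieve` function and injectable randomness; the deal list shape and function names are assumptions:

```javascript
// Sketch of retrievability probing: pick a random sample of deals and
// attempt a retrieval for each, reporting a success rate. `retrieve` and
// the deal shape are stand-ins, not a real Station API.
function sample(items, n, random = Math.random) {
  const copy = [...items];
  const picked = [];
  while (picked.length < n && copy.length > 0) {
    const i = Math.floor(random() * copy.length);
    picked.push(copy.splice(i, 1)[0]); // draw without replacement
  }
  return picked;
}

async function probeRetrievability(deals, n, retrieve, random) {
  const probes = sample(deals, n, random);
  let ok = 0;
  for (const deal of probes) {
    if (await retrieve(deal.provider, deal.cid)) ok += 1;
  }
  return { probed: probes.length, ok, rate: ok / probes.length };
}
```

Because the probes come from anonymous, randomly chosen Station nodes, SPs can't white-list the probers the way they can with the static L1 list.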
- Outsource ZK proof calculations, e.g. proving equality between an IPFS CID and the corresponding Filecoin storage. Talk to @Nicola, Kuba, willscott, and the CryptoNetLab team.
- Network benchmarking
- Help ProbeLab to run their measurements of IPFS (talk to @Yiannis Psaras)
- IPFS Public Gateway checker (talk to @Russell Dempsey)
- Pingdom/Thousand Eyes clone implemented via Station jobs
- Help Slingshot with data onboarding. Bacalhau can then run computation over data that was recently onboarded via L2s
- Ship Punchr app inside Station (talk to @Dennis Trautwein)
- Livepeer - benchmark latency of video streaming from their nodes (https://livepeer.org - Yondon Fu, Shannon)
- IPFS DX and Testground - talk to @galargh
- https://witnesschain.com is interested in integrating their speed tests with Station
WASM runtimes
- lucet - superseded by wasmtime
- eos-vm - the project seems to be dead: last commit in Nov 2019, last blog post in 2021.
- faasm - Linux only; leverages Linux-specific OS features for isolation, network usage shaping, and so on.
Other projects to look at
- IPVM Working Group: https://github.com/ipvm-wg
- InterPlanetary Consensus - can help us to cluster Stations and also check their uptime
- Nimbus - Consensus & execution client (talk to the Nim team, Tanguy Cizian)
- Yatima - a verifiable computing platform which uses formal proofs and zkSNARKs to make software safer. John Burnham from that company may be a contact.
- Fluence - a decentralized platform designed to enable the freedom of digital innovation via peer-to-peer applications. They provide a WASM runtime for composable modules.
  - Marine - the platform: https://fluence.dev/docs/marine-book/quick-start/develop-a-single-module-service
  - Aqua - an open-source language for programming peer-to-peer scenarios: https://fluence.dev/docs/marine-book/quick-start/develop-a-single-module-service
- Cloudflare is adding an API to establish TCP connections - maybe we can draw some inspiration there. https://github.com/cloudflare/workerd/pull/162
- How to run Go programs via WASM in Deno: https://dev.to/taterbase/running-a-go-program-in-deno-via-wasm-2l08
- Deno provides a TypeScript & WebAssembly runtime with resource isolation (limited access to the FS, network, etc.).
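Whichever runtime we pick, module execution ultimately means instantiating WASM bytes. A minimal example using the standard `WebAssembly` JS API (available in Node, Deno, and browsers), with a tiny hand-assembled module exporting `add(a, b)`; a real Station module would load compiled `.wasm` bytes from disk instead:

```javascript
// Minimal use of the standard WebAssembly JS API: instantiate a
// hand-assembled module that exports `add(a, b)` for two i32 values.
const wasmBytes = new Uint8Array([
  0x00, 0x61, 0x73, 0x6d, 0x01, 0x00, 0x00, 0x00,       // \0asm magic + version 1
  0x01, 0x07, 0x01, 0x60, 0x02, 0x7f, 0x7f, 0x01, 0x7f, // type: (i32, i32) -> i32
  0x03, 0x02, 0x01, 0x00,                               // one function of that type
  0x07, 0x07, 0x01, 0x03, 0x61, 0x64, 0x64, 0x00, 0x00, // export it as "add"
  0x0a, 0x09, 0x01, 0x07, 0x00,                         // code section, one body
  0x20, 0x00, 0x20, 0x01, 0x6a, 0x0b,                   // local.get 0/1, i32.add, end
]);

const { instance } = await WebAssembly.instantiate(wasmBytes);
const add = instance.exports.add;
```

Sandboxing then comes down to what imports we hand the instance: a module with no imports can't touch the FS or network at all, which is exactly the isolation property Station wants.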
- AssemblyScript - https://www.assemblyscript.org
- https://chain.link - can we use it for decentralised job scheduling?
Chainlink decentralized oracle networks provide tamper-proof inputs, outputs, and computations to support advanced smart contracts on any blockchain.
- https://github.com/o1-labs/snarkyjs
Typescript/Javascript framework for zk-SNARKs and zkApps
- https://github.com/rustwasm/wasm-pack
This tool seeks to be a one-stop shop for building and working with Rust-generated WebAssembly that you would like to interop with JavaScript, in the browser or with Node.js.
Excerpt from a discussion with IPFS folks:
Instead of Protocol Labs hosting preloads, we could have a pool of nodes speaking this protocol, donated by organizations from the Content Addressed Alliance.
Would it make sense to leverage Saturn L1 nodes (running in data centres), IPFS Desktop or Filecoin Station (apps running on consumer computers) to run these nodes?
Saturn nodes that get compensated: sure.
IPFS Desktop (and other end user clients): hard no.
If Saturn wants to join CAA and/or implement this open protocol, that is fine. Saturn nodes already operate in a way that preloads and reprovides third-party blocks, get compensated for it, know how to handle bad bits etc. So there is no increased risk here.
But IPFS Desktop users have no protection like that, no lawyers on staff. This is similar to running a Tor Exit node: only organizations that can afford it run one. Individuals only run Relay nodes (which hide content, removing the risk).
Since IPFS started, users rely on the guarantee their node will only store and provide blocks they explicitly asked for.
This built-in user agency feature is something we should protect at all cost.